62 research outputs found

    Genotype–phenotype mapping implications for genetic programming representation: commentary on “On the mapping of genotype to phenotype in evolutionary algorithms” by Peter A. Whigham, Grant Dick, and James Maclaurin

    This comment refers to the article available at doi:10.1007/s10710-017-9288-x. Here we comment on the article “On the mapping of genotype to phenotype in evolutionary algorithms” by Peter A. Whigham, Grant Dick, and James Maclaurin. The article reasons about analogies from molecular biology to evolutionary algorithms and discusses conditions for biological adaptations in the context of grammatical evolution, which provide a useful perspective for GP practitioners. However, the connection of the listed implications to GP practice is not sufficiently convincing. Therefore, this commentary will (1) examine the proposed principles one by one, challenging the authors to provide more supporting evidence where we felt this was needed, and (2) propose a methodical way for GP practitioners to apply these principles when designing GP representations.

    Improving the Tartarus problem as a benchmark in genetic programming

    For empirical research on computer algorithms, it is essential to have a set of benchmark problems on which the relative performance and applicability of different methods can be assessed. Most computational research fields have established sets of benchmark problems; the field of genetic programming, however, lacks a similarly rigorously defined set, and there is strong interest within the community in developing one. Following recent surveys [7], the desirable characteristics of a benchmark problem are now better defined. In this paper the Tartarus problem is proposed as a tunably difficult benchmark problem for use in genetic programming. The justification for this proposal is presented, together with guidance on its usage as a benchmark.
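    The grid world itself is small enough to sketch. The following Python sketch implements the Tartarus environment as it is commonly described in the literature; the 6x6 grid size and the corner/edge scoring used here are assumptions for illustration, not details taken from this paper. An evolved controller would choose among the three actions, and fitness is the score of the final block configuration.

```python
GRID = 6  # the classic Tartarus instance uses a 6x6 grid (assumed here)

def score(blocks):
    """Classic Tartarus scoring: 2 points for a block in a corner,
    1 point for a block against a wall, 0 otherwise."""
    total = 0
    for r, c in blocks:
        on_edge_r = r in (0, GRID - 1)
        on_edge_c = c in (0, GRID - 1)
        if on_edge_r and on_edge_c:
            total += 2
        elif on_edge_r or on_edge_c:
            total += 1
    return total

def step(state, action):
    """Apply 'left', 'right' or 'forward' to a state (pos, heading, blocks)."""
    (r, c), (dr, dc), blocks = state
    if action == 'left':            # rotate heading counter-clockwise
        dr, dc = -dc, dr
    elif action == 'right':         # rotate heading clockwise
        dr, dc = dc, -dr
    else:                           # move forward, pushing a block if possible
        nr, nc = r + dr, c + dc
        if not (0 <= nr < GRID and 0 <= nc < GRID):
            pass                    # wall ahead: no move
        elif (nr, nc) in blocks:
            br, bc = nr + dr, nc + dc           # cell behind the block
            if 0 <= br < GRID and 0 <= bc < GRID and (br, bc) not in blocks:
                blocks = (blocks - {(nr, nc)}) | {(br, bc)}
                r, c = nr, nc       # block pushed, agent advances
        else:
            r, c = nr, nc
    return (r, c), (dr, dc), blocks
```

    Tunable difficulty follows naturally from such a setup: grid size, block count, and the step budget can all be varied.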

    Modelling human preference in evolutionary art

    Creative activities, including the arts, are characteristic of humankind. Our understanding of creativity is limited, yet there is substantial research trying to mimic human creativity in artificial systems and, in particular, to produce systems that automatically evolve art appreciated by humans. We propose here to model human visual preference with a set of aesthetic measures identified through observation of human image selection, and then to use these measures for the automatic evolution of aesthetic images.

    Using genetic algorithms in computer vision: registering images to 3D surface model

    This paper shows a successful application of genetic algorithms in computer vision. We aim to build photorealistic 3D models of real-world objects by adding textural information to the geometry. In this paper we focus on the 2D-3D registration problem: given a 3D geometric model of an object and optical images of the same object, we need to find the precise alignment of the 2D images to the 3D model. We generalise the photo-consistency approach of Clarkson et al., who assume calibrated cameras, so that only the pose of the object in the world needs to be estimated. Our method extends this approach to the case of uncalibrated cameras, where both intrinsic and extrinsic camera parameters are unknown. We formulate the problem as an optimisation problem and use a genetic algorithm to find a solution. We use semi-synthetic data to study the effects of different parameter settings on the registration. Additionally, experimental results on real data are presented to demonstrate the efficiency of the method.
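    The optimisation formulation lends itself to a compact sketch. The genetic algorithm below is an illustrative stand-in, not the authors' implementation: the seven-parameter camera vector, the bounds, and the quadratic placeholder for the photo-consistency cost are all assumptions made for the example.

```python
import random

random.seed(0)

N_PARAMS = 7                                  # hypothetical: 3 rotation, 3 translation, focal length
BOUNDS = [(-1.0, 1.0)] * 6 + [(0.5, 2.0)]     # assumed search range per parameter

def photo_consistency_cost(params):
    # Placeholder objective: the real method would project the 3D model into
    # each image under `params` and score colour disagreement between views.
    # A toy quadratic with its optimum near 0.2 stands in here.
    return sum((p - 0.2) ** 2 for p in params)

def ga_register(pop_size=40, generations=100, sigma=0.05):
    """Minimise the cost over camera parameters with a simple elitist GA."""
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=photo_consistency_cost)       # rank by cost
        parents = pop[:pop_size // 4]              # truncation selection
        children = list(parents)                   # elitism: keep best quarter
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PARAMS)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(max(g + random.gauss(0, sigma), lo), hi)  # Gaussian mutation, clamped
                     for g, (lo, hi) in zip(child, BOUNDS)]
            children.append(child)
        pop = children
    return min(pop, key=photo_consistency_cost)

best = ga_register()
```

    In the real setting the fitness evaluation is the expensive step, since each candidate requires rendering the model into every image; the GA loop itself is the cheap part.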

    Partially Lazy Classification of Cardiovascular Risk via Multi-way Graph Cut Optimization

    Cardiovascular disease (CVD) is considered a leading cause of human mortality, with rising trends worldwide. Therefore, early identification of seemingly healthy subjects at risk is a priority. For this purpose, we propose a novel classification algorithm that provides a sound individual risk prediction based on a non-invasive assessment of retinal vascular function. So-called lazy classification methods offer reduced time complexity, saving model construction time and better adapting to newly available instances compared to well-known eager methods. Lazy methods are widely used due to their simplicity and competitive performance. However, traditional lazy approaches are more vulnerable to noise and outliers, due to their full reliance on the instances' local neighbourhood for classification. In this work, a learning method based on Graph Cut Optimization, called GCO_mine, is proposed, which considers both the local arrangement and the global structure of the data, resulting in improved performance relative to traditional lazy methods. We compare GCO_mine coupled with genetic algorithms (hGCO_mine) with established lazy and eager algorithms to predict cardiovascular risk based on Retinal Vessel Analysis (RVA) data. The highest accuracy, 99.52%, is achieved by hGCO_mine. The performance of GCO_mine is additionally demonstrated on 12 benchmark medical datasets from the UCI repository; in 8 of the 12 datasets, GCO_mine outperforms its counterparts. GCO_mine is recommended for studies where new instances are expected to be acquired over time, as it saves model creation time and allows for better generalization compared to state-of-the-art methods.
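    The weakness of traditional lazy methods that the abstract describes can be seen in a minimal k-nearest-neighbour sketch, which labels each query purely from its local neighbourhood; the value of k and the toy data below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Traditional lazy classification: no model is built up front; each
    query is labelled by a majority vote over its k nearest training
    instances, relying entirely on the local neighbourhood."""
    dist = lambda x, y: sum((a - b) ** 2 for a, b in zip(x, y))  # squared Euclidean
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy two-feature risk data (hypothetical, for illustration only).
train = [((0.0, 0.0), 'low'), ((0.2, 0.1), 'low'),
         ((1.0, 1.0), 'high'), ((0.9, 1.1), 'high'), ((1.2, 0.8), 'high')]
```

    A graph-cut formulation such as the one proposed here additionally couples these local votes through a global term over the whole dataset, which is what makes the method less sensitive to isolated noisy neighbours; the full algorithm is beyond this sketch.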

    Emergence in genetic programming: let's exploit it!

    Banzhaf explores the concept of emergence and how and where it happens in genetic programming [1]. Here we consider the question: what shall we do with it? We argue that, given our ultimate goal of producing genetic programming systems that solve new and difficult problems, we should take advantage of emergence to get closer to this goal.

    Gaining insights into road traffic data through genetic improvement

    We argue that Genetic Improvement can be successfully used for enhancing road traffic data mining. This would support the relevant decision makers in extending the existing network of devices that sense and control city traffic, with the end goal of improving vehicle flow and reducing the frequency of road accidents. Our position results from a set of preliminary observations emerging from the analysis of open-access road traffic data collected in real time by the Birmingham City Council.